How to Build a Homelab with Proxmox VE: Run Multiple Servers on One Machine
Most homelab setups start the same way: one service per machine, or a tangle of Docker containers on a single Linux box with no isolation between them. Proxmox VE is the tool that lets you consolidate everything onto one physical server and carve it into as many isolated virtual machines and containers as the hardware can support. Run Home Assistant, Frigate, Pi-hole, a Windows VM, a development server, and a Minecraft server all on the same box — each with its own resources, its own network config, and full snapshot and backup support. This guide covers everything from installation to first VM to accessing your Proxmox dashboard remotely with a Localtonet tunnel.
What Is Proxmox VE and Why Use It?
Proxmox Virtual Environment is a Type-1 hypervisor — it installs directly on bare metal hardware and runs everything else as virtual machines or containers on top of it. Unlike running VirtualBox on Windows or Docker on Ubuntu, Proxmox is the operating system. There is no host OS underneath it taking up resources. The hardware goes straight to Proxmox, and Proxmox carves it up between guests.
It combines two mature virtualization technologies in a single, unified web interface: KVM for full virtual machines and LXC for lightweight Linux containers.
Version 9.1 is based on Debian 13 (Trixie) with Linux Kernel 6.17. Key additions: OCI image support for LXC containers (pull Docker Hub images as LXC templates), vTPM state stored in qcow2 (enabling snapshots of Windows VMs with vTPM on NFS storage), improved nested virtualization, major SDN Fabric improvements, and initial Intel TDX and AMD SEV confidential computing support. The April 2026 9.1.8 patch added automatic HA rebalancing, which distributes workloads back evenly when a failed node returns to a cluster.
Hardware Requirements
| Component | Minimum | Recommended for homelab | Notes |
|---|---|---|---|
| CPU | 64-bit with VT-x (Intel) or AMD-V (AMD) | 4+ cores, modern generation (Ryzen 5, Core i5+) | Check BIOS: virtualization extensions must be enabled. Intel VT-d / AMD-Vi required for PCIe passthrough. |
| RAM | 2 GB (barely functional) | 16-32 GB | Each VM needs its own RAM allocation. ZFS needs additional RAM for its ARC cache (1 GB per TB of storage, minimum 8 GB system RAM for ZFS). |
| System disk | 32 GB SSD | 120-240 GB SSD (dedicated) | Do NOT use a USB drive for the Proxmox system disk. Proxmox writes heavily to the system disk (logs, corosync) and USB drives wear out and fail within months. |
| VM/data storage | Any extra disk | NVMe SSD for VMs, HDD for bulk storage | Keep VM storage on a separate disk from the Proxmox OS. ZFS on NVMe gives you snapshots and checksums at near-native performance. |
| Network | 1 Gigabit NIC | 2 NICs (management + VM traffic) | Two NICs let you put management traffic and VM traffic on separate bridges, which is cleaner and more secure. Intel NICs have better Linux driver support than Realtek. |
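Before buying or repurposing hardware, you can check from any Linux live environment whether the CPU exposes the required extensions. A nonzero count means VT-x or AMD-V is present and enabled:

```shell
# Count CPU flag occurrences of Intel VT-x (vmx) or AMD-V (svm);
# 0 means the extensions are missing or disabled in the BIOS/UEFI.
grep -cE 'vmx|svm' /proc/cpuinfo
```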
Good homelab hardware options for Proxmox
For a starter homelab, mini PCs work well: Beelink EQ13 (Intel N100, dual NIC, ~$170), Minisforum UN100 series, or any used business desktop (Dell OptiPlex, HP EliteDesk, Lenovo ThinkCentre). For a more powerful build, any Ryzen 5 or Core i5 machine with 32 GB RAM handles a dozen or more VMs comfortably. Old gaming PCs make excellent Proxmox hosts: single-core performance matters more for VM responsiveness than raw core count.
Installing Proxmox VE
Download the ISO and write to USB
Download the Proxmox VE 9.1 ISO from proxmox.com/downloads. Write it to a USB drive using Rufus on Windows (in DD mode, not ISO mode) or dd on Linux/macOS:
# Find your USB device first
lsblk
# Write ISO to USB
sudo dd if=proxmox-ve_9.1-1.iso of=/dev/sdX bs=1M status=progress conv=fsync
Boot from USB and run the installer
Enter your BIOS/UEFI setup (usually Del or F2 during POST; F11 or F12 often opens a one-time boot menu instead) and set the USB drive as the first boot device. Enable virtualization extensions (VT-x, VT-d on Intel / AMD-V, AMD-Vi on AMD) if they are not already enabled. Boot from USB and select "Install Proxmox VE (Graphical)".
Choose your filesystem
The installer asks which filesystem to use for the system disk. Two practical choices:
| Filesystem | Choose if |
|---|---|
| ext4 (default) | You have less than 8 GB RAM, you are new to Proxmox, or you want the simplest setup. Reliable and fast. No snapshots on the system disk itself. |
| ZFS (RAID0 for single disk) | You have 8 GB+ RAM, want checksums and compression on the system disk, or plan to add a second disk later for ZFS mirroring. Better for data integrity. |
Set a static IP address
The installer asks for a network configuration. Set a static IP address — do not use DHCP. If the IP changes after a router reboot, you lose access to the web UI. Choose an IP outside your router's DHCP range (e.g. 192.168.1.10 if your DHCP range starts at 192.168.1.100).
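For reference, the installer writes this network choice to /etc/network/interfaces as a Linux bridge named vmbr0, which is also what VMs and containers attach to. A typical result looks like the following (the interface name eno1 and the addresses are examples; yours will differ):

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

If you ever need to change the IP later, edit this file and apply it with ifreload -a (Proxmox uses ifupdown2).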
Complete installation and access the web UI
After installation (~10 minutes), the machine reboots into Proxmox. Remove the USB drive. Open a browser on another machine and navigate to:
https://YOUR-PROXMOX-IP:8006
Your browser will warn about a self-signed certificate — this is normal. Proceed anyway. Log in as root with the password you set during installation. The realm dropdown should be Linux PAM.
Post-Install: Setting Up the Free Repository
After first login, Proxmox shows a "No valid subscription" warning and has the enterprise repository configured. The enterprise repository requires a paid subscription (from EUR 115/year per CPU). For a homelab, use the free community repository instead.
# Disable the enterprise repository (Proxmox 9 ships it as a deb822 .sources file)
mv /etc/apt/sources.list.d/pve-enterprise.sources /etc/apt/sources.list.d/pve-enterprise.sources.disabled
# Add the community (no-subscription) repository ("trixie" matches the Debian 13 base)
echo "deb http://download.proxmox.com/debian/pve trixie pve-no-subscription" \
> /etc/apt/sources.list.d/pve-no-subscription.list
# Update package lists and upgrade
apt update && apt full-upgrade -y
To remove the "No valid subscription" popup that appears on every login:
sed -Ezi.bak \
"s/(Ext\.Msg\.show\(\{\s+title: gettext\('No valid sub)/void\(\{ \/\/\1/g" \
/usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
systemctl restart pveproxy
The proxmoxlib.js file is replaced when Proxmox packages update, so the popup returns after an update. Rerun the sed command after each major update, or consider using the community post-install script which handles this automatically.
VMs vs LXC Containers: When to Use Each
This is the most important practical decision in Proxmox. Getting it wrong wastes resources or creates unnecessary complexity.
| Attribute | KVM Virtual Machine | LXC Container |
|---|---|---|
| Kernel | Own kernel, fully isolated from host | Shares host kernel — must be Linux |
| RAM overhead | Higher (full OS in memory) | 50-80% less RAM for same workload |
| Start time | 30-90 seconds (full boot) | 1-3 seconds |
| Guest OS | Any OS: Windows, Linux, BSD, macOS | Linux only |
| Disk I/O | Near-native with VirtIO drivers | Native (no virtualization layer) |
| Isolation | Full hardware isolation | Namespace isolation (shared kernel) |
| Snapshots | Full VM snapshots (ZFS or QCOW2) | Supported on ZFS storage |
| GPU passthrough | Full PCIe passthrough supported | Limited GPU access (no full passthrough) |
| Docker inside | Works normally | Works with privileged container or nesting=1 |
Simple rule: use LXC for Linux services, VMs for everything else
Running Pi-hole, Nginx, a database, Home Assistant, or any standard Linux service? Use an LXC container. It starts instantly, uses a fraction of the RAM, and feels exactly like a real Linux server over SSH. Running Windows, a firewall appliance (OPNsense, pfSense), or anything that needs GPU passthrough? Use a VM. For Docker workloads, many experienced homelab users run Docker inside an LXC container with nesting enabled — this gives you lightweight resource usage while keeping Docker's familiar workflow.
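If you created a container without nesting and want Docker inside it later, the feature can be flipped from the host shell. A sketch, assuming CT ID 200 (keyctl is also commonly needed for Docker in unprivileged containers):

```shell
# Enable nesting and keyctl on container 200 (the CT ID is an example)
pct set 200 --features nesting=1,keyctl=1
# Restart the container so the new features take effect
pct stop 200 && pct start 200
```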
Creating Your First VM
Upload an ISO to Proxmox storage
In the web UI: select your node in the left sidebar, click local storage, then ISO Images, then Upload or Download from URL. Download Ubuntu Server 24.04 LTS (or any Linux/Windows ISO you want to install).
Create the VM
Click Create VM in the top right. Work through the wizard:
| Setting | Recommended value | Why |
|---|---|---|
| OS → ISO | Select your uploaded ISO | |
| System → Machine | q35 | Modern machine type with PCIe support |
| System → BIOS | OVMF (UEFI) | Required for Windows 11 and modern Linux |
| Disk → Bus | SCSI with VirtIO SCSI controller | Best disk performance on Linux. Use IDE only for very old OSes. |
| CPU → Type | host | Passes all host CPU flags to the VM. Best performance for single-host setups. Use x86-64-v2 if you plan live migration between different CPU generations. |
| Network → Model | VirtIO (paravirtualized) | Best network performance. Avoid e1000 except for old Windows guests. |
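The same wizard settings can be scripted with qm create from the host shell. A sketch, assuming VM ID 100, a ZFS storage named local-zfs, and an Ubuntu ISO already uploaded to local storage (adjust all three, including the ISO filename, to your setup):

```shell
# Create a VM matching the wizard recommendations above
qm create 100 \
  --name ubuntu-server \
  --memory 4096 --cores 2 --cpu host \
  --machine q35 --bios ovmf --efidisk0 local-zfs:1 \
  --scsihw virtio-scsi-pci --scsi0 local-zfs:32 \
  --net0 virtio,bridge=vmbr0 \
  --cdrom local:iso/ubuntu-24.04-live-server-amd64.iso \
  --agent enabled=1
qm start 100
```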
Start the VM and install the OS
Start the VM and click Console to see its display. Install your OS normally through the console. For Windows guests, you will need to load VirtIO drivers during installation — download the virtio-win.iso from the Fedora project and attach it as a second CD-ROM drive before starting the VM.
Install the QEMU guest agent
After OS installation, install the QEMU guest agent inside the VM. This enables Proxmox to show the VM's IP address in the web UI, perform clean graceful shutdowns, and support live migration and consistent snapshots.
sudo apt install -y qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent
Then in the Proxmox web UI, go to the VM → Options → QEMU Guest Agent → enable it, and restart the VM.
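The same option can be set from the host shell instead, and qm agent ping confirms the agent inside the guest is responding (VM ID 100 is an example):

```shell
# Enable the guest agent option on VM 100, then power-cycle the VM
qm set 100 --agent enabled=1
qm stop 100 && qm start 100
# Exits with code 0 once the agent inside the guest responds
qm agent 100 ping
```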
Creating an LXC Container
Download an LXC template
In the web UI: select your node, click local storage, then CT Templates, then Templates. Download Ubuntu 24.04, Debian 12, or Alpine. Templates are small archives (~100-200 MB) and download in under a minute.
In Proxmox 9.1, you can also use OCI images directly: pull any Docker Hub image as an LXC template for lightweight application containers.
Create the container
Click Create CT. Key settings:
| Setting | Recommended |
|---|---|
| Unprivileged container | Yes (default) — safer, maps root inside container to a non-root user on host |
| Nesting | Enable if you want to run Docker inside this LXC container |
| RAM | 512 MB for a simple service like Pi-hole; 1-2 GB for heavier apps |
| Disk | 8-20 GB for most services (add more for media or databases) |
| Network | Static IP recommended (same reason as the Proxmox host itself) |
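These wizard settings map directly to pct create on the host shell. A sketch, assuming CT ID 200 and a downloaded Ubuntu 24.04 template (the exact filename under local:vztmpl/ depends on the template version you downloaded; the addresses are examples):

```shell
# Unprivileged Ubuntu container with nesting enabled and a static IP
pct create 200 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst \
  --hostname pihole \
  --memory 512 --cores 1 \
  --rootfs local-zfs:8 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.1.50/24,gw=192.168.1.1 \
  --unprivileged 1 --features nesting=1 \
  --start 1
```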
Start and access
Start the container and click Console. Or SSH in directly using the static IP you configured. The container behaves like a regular Linux server — install packages with apt, run systemd services, and configure everything the normal way.
Storage: ZFS vs ext4 vs LVM-thin
Storage is where Proxmox's flexibility shows most clearly. You can mix storage types on the same host, using the right option for each use case.
| Storage type | Snapshots | Thin provisioning | RAM requirement | Best for |
|---|---|---|---|---|
| ZFS | ✅ Instant, copy-on-write | ✅ Yes | 8 GB+ recommended | Primary VM storage. Data integrity, compression, deduplication. Excellent on NVMe. |
| LVM-thin | ✅ Yes | ✅ Yes | Low | Good alternative to ZFS when RAM is limited. No checksums. Available on any disk. |
| ext4/dir | ❌ No (backup only) | ❌ No (thick allocation) | None | ISO storage, backups, CT templates. Simplest. No snapshot capability for VMs. |
| Ceph | ✅ Yes | ✅ Yes | High | Multi-node clusters with shared storage. Overkill for single-node homelab. |
To add a ZFS pool for VM storage on a second disk after installation:
# List available disks (identify your second disk, e.g. sdb or nvme1n1)
lsblk
# Create a ZFS pool named "vmdata" on the disk (wipes all data on the disk)
zpool create -f vmdata /dev/sdb
# Verify
zpool status vmdata
Then in the web UI: Datacenter → Storage → Add → ZFS → select the pool. Proxmox automatically makes it available for VM disk images with snapshot support.
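Two optional tweaks are worth doing on a fresh pool: enable lz4 compression (nearly free in CPU terms and usually a net win), and on RAM-constrained hosts cap the ARC so it does not compete with VM memory. The 4 GiB cap below is an example value, not a recommendation for every host:

```shell
# Enable lightweight compression on the whole pool
zfs set compression=lz4 vmdata
zfs get compression vmdata
# Cap the ZFS ARC at 4 GiB (value in bytes); takes effect after reboot
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u
```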
Community Helper Scripts: One-Line LXC App Installs
The community-scripts/ProxmoxVE repository maintains helper scripts that install popular self-hosted apps directly into LXC containers (or, for a few apps such as Home Assistant OS, as full VMs) with a single command run from the Proxmox host shell. This is one of the most useful resources in the Proxmox homelab community.
# Home Assistant OS (created as a full VM, from the vm/ directory)
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/vm/haos-vm.sh)"
# Pi-hole (created as an LXC container, from the ct/ directory)
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/pihole.sh)"
The scripts handle container creation, OS installation, and application setup automatically. They prompt you for resource allocation and create the container with recommended settings. Available scripts cover Home Assistant, Pi-hole, Nextcloud, Jellyfin, Immich, AdGuard Home, Vaultwarden, Frigate, Portainer, Grafana, and dozens more.
Running any script directly from the internet as root carries risk. Before running a community script, open the GitHub URL in your browser and read the script to understand what it does. The community-scripts project is well-maintained and widely used, but the principle of reviewing before running always applies. Also check the project's GitHub issues if you are on Proxmox 9.1, as some scripts are still being updated for 9.1 compatibility after the kernel and networking changes in that release.
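One way to follow that advice is to download the script to a file, read it, and only then execute it, instead of piping it straight into bash:

```shell
# Fetch to a file instead of piping directly into bash
wget -qO /tmp/pihole.sh \
  https://github.com/community-scripts/ProxmoxVE/raw/main/ct/pihole.sh
# Read it (or review it on GitHub) before executing anything
less /tmp/pihole.sh
# Run only after review
bash /tmp/pihole.sh
```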
Backups: Protecting Your Homelab
Proxmox has built-in backup scheduling. The simplest setup is to back up VMs and containers to local storage on a schedule. Go to Datacenter → Backup → Add, select a storage target, choose your VMs and containers, and set a schedule.
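Scheduled jobs use vzdump under the hood, and the same backup can be run manually from the host shell. A sketch, assuming guest IDs 100 and 200 and the default local storage:

```shell
# Snapshot-mode backup (no downtime) with zstd compression
vzdump 100 --storage local --mode snapshot --compress zstd
# Back up several guests in one run
vzdump 100 200 --storage local --mode snapshot --compress zstd
```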
For serious data protection, Proxmox Backup Server (PBS) is the purpose-built companion. PBS runs as a separate installation (free download from proxmox.com) on a second machine, a NAS, or even a VM on another Proxmox host. It provides deduplicated, incremental backups (only changed data is transferred and stored), client-side encryption, scheduled verification of backup integrity, and sync jobs for replicating backups to a remote PBS instance.
Accessing the Proxmox Web UI Remotely
The Proxmox web UI runs on https://YOUR-IP:8006 on your local network. To access it from outside your home — to manage VMs while travelling, check on a running job, or restart a stuck container from your phone — you need a way to reach port 8006 from the internet.
A Localtonet HTTP tunnel creates a stable public HTTPS URL that forwards to your local port 8006. This works even behind CGNAT with no static IP and no router configuration.
Install Localtonet on the Proxmox host
# Download the Linux x64 binary from localtonet.com/download
chmod +x localtonet
mv localtonet /usr/local/bin/
localtonet authtoken YOUR_TOKEN
Create an HTTP tunnel for port 8006
In the Localtonet dashboard: Protocol = HTTP, Local IP = 127.0.0.1, Local Port = 8006, Subdomain = your choice (e.g. proxmox → proxmox.localto.net).
Install as a system service for auto-start
localtonet --install-service --authtoken YOUR_TOKEN
localtonet --start-service --authtoken YOUR_TOKEN
Open https://proxmox.localto.net from any browser. The Proxmox login page loads exactly as it does on the local network. Log in as root and manage everything normally.
When you access Proxmox via the Localtonet URL, the browser sees the Proxmox self-signed certificate. This is normal and expected. You can safely proceed past the browser warning for your own server. To eliminate the warning permanently, you can configure Proxmox to use a proper TLS certificate from Let's Encrypt, but this requires a domain name and is optional for homelab use.
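If you do own a domain, Proxmox has built-in ACME support, so the Let's Encrypt setup is a few pvenode commands. A sketch with placeholder mail address and domain that you must replace; the default HTTP challenge also requires port 80 of the host to be reachable from the internet:

```shell
# Register an ACME account with Let's Encrypt
pvenode acme account register default mail@example.com
# Tell Proxmox which domain the certificate is for
pvenode config set --acme domains=pve.example.com
# Order and install the certificate (pveproxy picks it up automatically)
pvenode acme cert order
```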
Essential CLI Commands
| Command | What it does |
|---|---|
| qm list | List all VMs with their status, RAM, and uptime |
| qm start <vmid> | Start a VM by its ID number |
| qm shutdown <vmid> | Gracefully shut down a VM (sends ACPI power button signal) |
| qm snapshot <vmid> snap-name | Take an instant snapshot of a VM (requires ZFS or QCOW2 storage) |
| qm rollback <vmid> snap-name | Roll back a VM to a previous snapshot |
| qm config <vmid> | Show the full hardware configuration of a VM |
| pct list | List all LXC containers with their status |
| pct start <ctid> | Start an LXC container |
| pct enter <ctid> | Get a shell inside a running LXC container (no SSH needed) |
| pct snapshot <ctid> snap-name | Snapshot an LXC container (ZFS storage required) |
| pvesh get /nodes | Query the Proxmox API via CLI — useful for scripting and automation |
| pveversion -v | Show installed versions of all Proxmox packages |
| apt update && apt full-upgrade | Update Proxmox to the latest version in the configured repository |
Troubleshooting Common Problems
| Problem | Cause | Fix |
|---|---|---|
| Cannot reach web UI at :8006 after install | Firewall, wrong IP, or network misconfiguration during setup | Log into the Proxmox console directly. Run hostname -I to confirm the IP. Check systemctl status pveproxy. Verify the management interface is connected to your network. |
| "No valid subscription" popup on every login | Enterprise repo configured, no subscription key | Disable enterprise repo, add no-subscription repo, run the proxmoxlib.js sed command shown above. |
| VM won't start: "KVM virtualization is not available" | VT-x/AMD-V not enabled in BIOS, or running Proxmox inside a VM without nested virtualization | Reboot into BIOS and enable Intel VT-x (and VT-d for passthrough) or AMD-V. If Proxmox itself is running inside another hypervisor (e.g. for testing), enable nested virtualization in the outer hypervisor; for real workloads, run Proxmox on bare metal. |
| Windows VM very slow disk performance | Using IDE disk controller instead of VirtIO SCSI | Inside Windows, install the VirtIO storage driver from the virtio-win ISO, then change the disk bus to SCSI with VirtIO SCSI controller in VM settings. Significant performance improvement. |
| LXC container fails to start after Proxmox update | Known issue with the no-subscription repo: LXC packages sometimes receive updates that break specific configurations | Check journalctl -u pve-guests and pct start <ctid> output. If NFS mounts or specific kernel features broke, check the Proxmox forum for the specific error. Pinning the LXC package version is a temporary workaround. |
| ZFS pool not showing in web UI after adding | Pool created but not added to Proxmox storage | Go to Datacenter → Storage → Add → ZFS and select the pool. Or add via CLI: pvesm add zfspool vmdata --pool vmdata --content images,rootdir |
| Snapshot option greyed out for a VM | VM disk is on storage type that does not support snapshots (directory, LVM without thin) | Move the VM disk to ZFS or LVM-thin storage. In web UI: VM → Hardware → Hard Disk → Move Disk → select ZFS storage target. |
Frequently Asked Questions
Can I run Proxmox on a used server from eBay?
Yes, and this is one of the most cost-effective homelab setups. Used enterprise servers from eBay (Dell PowerEdge R720, HP ProLiant DL380 Gen9, etc.) offer 128+ GB RAM and multiple CPUs for a fraction of new hardware cost. The trade-off is power consumption: a 1U server can use 200-400 watts at load versus 15-30 watts for a mini PC. For a low-power homelab that runs 24/7, a mini PC or workstation is more economical. For maximum RAM and CPU at the lowest upfront cost, a used rack server is hard to beat.
Do I need a Proxmox subscription for home use?
No. Proxmox VE is fully functional without a subscription. The no-subscription community repository provides updates, bug fixes, and new features — it is slightly behind the enterprise repository and not officially recommended for production use, but it works reliably for most homelab setups. The subscription (starting at EUR 115/year per CPU) provides access to the enterprise-tested repository and commercial support. For a home server, the community repository is the right choice.
How is Proxmox different from just running Docker on Linux?
Docker on Linux gives you containerized applications that share one Linux install. All containers share the same kernel and the same OS. Proxmox gives you full isolation: each VM or LXC container is genuinely separate. A broken service in one container cannot crash another. You can run completely different Linux distributions in different containers. You can run Windows alongside Linux. You get proper snapshot and backup support for the entire system state, not just application data. You can roll back a broken update in 30 seconds. For a single application, Docker is simpler. For a homelab running a dozen services, Proxmox is the more robust and manageable foundation.
Can I run Proxmox on a single disk system?
Yes, though it is not ideal. The Proxmox installer partitions the disk for the OS and, with the default ext4/LVM layout, creates an LVM-thin pool (local-lvm) on the remaining space for VM disks, which supports snapshots and thin provisioning. Without ZFS you give up checksums and compression, and losing the disk means losing both the OS and all VMs at once. A common upgrade path is to add a second SSD later for ZFS VM storage while keeping the original disk for the Proxmox OS.
Manage Your Proxmox Homelab from Anywhere
A Localtonet HTTP tunnel gives your Proxmox web UI a stable public HTTPS URL. Start and stop VMs, check on running jobs, and manage your entire homelab from your phone, no matter where you are.
Create Your Free Tunnel →